Photometric differences are widely used as supervision signals to train neural networks that estimate depth and camera pose from unlabeled monocular videos. However, occlusions and moving objects in a scene violate the underlying static-scene assumption and therefore harm model optimization, and pixels in textureless or otherwise less discriminative regions further hinder training. To address these problems, we first handle moving objects and occlusions by exploiting the differences between the flow fields and depth structures generated by affine transformation and by view synthesis, respectively. Second, we mitigate the effect of textureless regions on model optimization by measuring differences between features that carry more semantic and contextual information, without adding extra networks. Moreover, although a bidirectional component is used in each sub-objective, each image pair is reasoned about only once, which reduces overhead. Extensive experiments and visual analyses demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
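As a rough illustration of the kind of supervision signal this abstract refers to, the sketch below (not the authors' code; shapes and helper names are assumptions) computes a standard SSIM+L1 photometric difference between a target frame and a view-synthesized frame, and an analogous difference on feature maps, which is the kind of feature-level measurement that stays informative in textureless regions.

import torch
import torch.nn.functional as F

def photometric_loss(target, warped, alpha=0.85):
    """SSIM + L1 photometric difference between target and view-synthesized image ([B, 3, H, W])."""
    l1 = (target - warped).abs().mean(1, keepdim=True)               # per-pixel L1
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    mu_w = F.avg_pool2d(warped, 3, 1, 1)
    sigma_t = F.avg_pool2d(target * target, 3, 1, 1) - mu_t ** 2
    sigma_w = F.avg_pool2d(warped * warped, 3, 1, 1) - mu_w ** 2
    sigma_tw = F.avg_pool2d(target * warped, 3, 1, 1) - mu_t * mu_w
    ssim = ((2 * mu_t * mu_w + 1e-4) * (2 * sigma_tw + 1e-3)) / \
           ((mu_t ** 2 + mu_w ** 2 + 1e-4) * (sigma_t + sigma_w + 1e-3))
    ssim = torch.clamp((1 - ssim) / 2, 0, 1).mean(1, keepdim=True)
    return alpha * ssim + (1 - alpha) * l1

def feature_metric_loss(feat_target, feat_warped):
    """Same idea applied to feature maps, which carry more semantic and contextual
    information and are less ambiguous in textureless regions."""
    return (feat_target - feat_warped).abs().mean(1, keepdim=True)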
Large pretrained language models can easily produce toxic or biased content, which hinders their practical use. To detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourced workers, or automatic generation to construct adversarial contexts that are likely to induce toxic generations. However, what type of context is more likely to induce unsafe responses remains under-explored. In this paper, we identify context toxicity and context category (e.g., \textit{profanity}, \textit{insult}, \textit{drugs}, etc.) as two important factors causing safety issues in response generation. Hence, we propose a method called \emph{reverse generation} to construct adversarial contexts conditioned on a given response, with the flexibility to control the category, toxicity level, and inductivity of the generated contexts. Via reverse generation, we augment the existing BAD dataset and construct a new dataset, BAD+, which contains more than 120K diverse and highly inductive contexts in 12 categories. We test three popular pretrained dialogue models (Blender, DialoGPT, and Plato2) and find that BAD+ can largely expose their safety problems. Furthermore, we show that BAD+ can greatly enhance the safety of generation and reveal the key factors behind safety improvement. Our code and dataset are available at \url{https://github.com/thu-coai/Reverse_Generation}.
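A hedged sketch of the reverse-generation idea: condition a seq2seq model on a target response plus control codes for category, toxicity level, and inductivity, and decode contexts likely to induce that response. The control-token format and model choice below are illustrative assumptions, not the paper's setup, and the model would need to be fine-tuned with such codes for the control to take effect.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def reverse_generate(response, category, toxicity, inductivity, n=4):
    # Prepend hypothetical control codes so generation can be steered per attribute.
    src = f"<cat:{category}> <tox:{toxicity}> <ind:{inductivity}> {response}"
    inputs = tokenizer(src, return_tensors="pt")
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9,
                             num_return_sequences=n, max_new_tokens=64)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]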
Passive millimeter-wave (PMMW) imaging is a promising technique for human security screening. Several popular object detection networks have been applied to PMMW images; however, restricted by their low resolution and high noise, deep-learning-based detection of objects hidden in PMMW images usually suffers from low accuracy and low classification confidence. To tackle these problems, this paper proposes a Task-Aligned Detection Transformer network, named PMMW-DETR. In the first stage, a Denoising Coarse-to-Fine Transformer (DCFT) backbone is designed to extract long- and short-range features at different scales. In the second stage, we propose a Query Selection module that introduces learned spatial features into the network as prior knowledge, enhancing its semantic perception capability. In the third stage, to improve classification performance, we introduce a Task-Aligned Dual-Head block that decouples the classification and regression tasks. Based on our self-developed PMMW security screening dataset, experimental results, including comparisons with state-of-the-art (SOTA) methods and an ablation study, demonstrate that PMMW-DETR achieves higher accuracy and classification confidence than previous works and remains robust to low-quality PMMW images.
Solving math word problems is a task that requires analyzing the relations between quantities and accurately understanding contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and can be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which forces models to distinguish semantically similar examples that hold different mathematical logic. We adopt a self-supervised strategy that enriches examples with subtle textual variance via textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both the equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art performance on both widely used benchmark datasets and carefully designed challenge datasets in English and Chinese.\footnote{Our code and data are available at \url{https://github.com/yiyunya/Textual_CL_MWP}.}
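For illustration only (assumed names, not the paper's code), the sketch below shows a contrastive objective of the kind described: it pulls a problem and its textually perturbed variant together while pushing apart the hardest-to-differentiate retrieved negatives.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, hard_negatives, temperature=0.1):
    """anchor, positive: [B, D]; hard_negatives: [B, K, D] mined from equation/textual similarity."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(hard_negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature      # [B, 1]
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # [B, K]
    logits = torch.cat([pos_sim, neg_sim], dim=1)                          # positive pair is class 0
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)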
To solve math word problems, human students employ diverse reasoning logics that lead to different solution equations. However, the mainstream sequence-to-sequence approach for automatic solvers aims to decode a fixed solution equation supervised by human annotation. In this paper, we propose a controlled equation generation solver that leverages a set of control codes to guide the model to consider a given reasoning logic and decode the corresponding equation expression transformed from the human reference. Empirical results show that our method universally improves performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks, with gains of up to 13.2% accuracy on the challenging multiple-unknown datasets.
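A minimal sketch of the control-code conditioning idea (the code names and format are hypothetical, not the paper's): a control token prepended to the problem text tells the decoder which reasoning logic, and hence which equation form, to produce.

CONTROL_CODES = ["<forward>", "<backward>", "<unknown-first>"]  # hypothetical reasoning-logic codes

def build_input(problem_text: str, control_code: str) -> str:
    assert control_code in CONTROL_CODES
    return f"{control_code} {problem_text}"

# The same problem decoded under two different reasoning logics might yield, e.g.:
#   "<forward> Tom has 3 apples and buys 5 more. How many now?"   ->  "x = 3 + 5"
#   "<backward> Tom has 3 apples and buys 5 more. How many now?"  ->  "x - 5 = 3"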
Variational quantum algorithms (VQAs) have shown great potential in the NISQ era. In the workflow of a VQA, the parameters of the ansatz are iteratively updated to approximate the desired quantum state. Various efforts have been made to design better ansatzes with fewer gates. On a quantum computer, the gate ansatz is eventually transformed into control signals, such as microwave pulses on transmons, and the control pulses require elaborate calibration to minimize errors such as over- and under-rotation. For VQAs this procedure introduces redundancy, since the variational properties of VQAs can naturally handle over- and under-rotation by updating the amplitude and frequency parameters. Therefore, we propose PAN, a native-pulse ansatz generator framework for VQAs. We generate native-pulse ansatzes with trainable parameters for amplitudes and frequencies. In the proposed PAN, we tune parametric pulses that are natively supported on NISQ computers. Since the native-pulse ansatz does not satisfy the parameter-shift rule, non-gradient optimizers must be deployed. To limit the number of parameters sent to the optimizer, we adopt a progressive way of generating the native-pulse ansatz. Experiments are conducted on both simulators and quantum devices to validate our methods. When adopted on NISQ machines, PAN improves latency by 86% on average. PAN achieves 99.336% and 96.482% accuracy for VQE tasks on H2 and HeH+, respectively, even with considerable noise on NISQ machines.
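A toy illustration of why a gradient-free optimizer is used when the tunable parameters are pulse amplitudes and frequencies (the parameter-shift rule does not apply to them). The "device" below is a crude single-qubit stand-in, not a real backend and not the PAN framework itself.

import numpy as np
from scipy.optimize import minimize

def expectation(params, duration=1.0, detuning_scale=0.3):
    amp, freq = params
    # Crude model: the effective rotation angle shrinks as the drive frequency detunes.
    theta = amp * duration / (1.0 + detuning_scale * freq ** 2)
    return np.cos(theta)          # <Z> after an X-rotation by theta

# Minimize <Z> (drive the qubit toward |1>) with the gradient-free Nelder-Mead method.
result = minimize(expectation, x0=[0.1, 0.0], method="Nelder-Mead")
print(result.x, result.fun)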
Expandable networks have demonstrated their advantages in handling the catastrophic forgetting problem. Considering that different tasks may need different structures, recent methods design dynamic structures adapted to different tasks via sophisticated techniques. Their routine is to first search for an expandable structure and then train the new task; however, this splits a task into multiple training stages, leading to suboptimal performance or excessive computational cost. In this paper, we propose an end-to-end trainable adaptively expandable network named E2-AEN, which dynamically generates lightweight structures for new tasks without any accuracy drop on previous tasks. Specifically, the network contains a sequence of powerful feature adapters that augment the previously learned representations for new tasks and avoid task interference. These adapters are controlled by an adaptive gate-based pruning strategy that decides whether the expanded structures can be pruned, dynamically changing the network structure according to the complexity of the new task. Furthermore, we introduce a novel sparse activation regularization to encourage the model to learn discriminative features with limited parameters. E2-AEN reduces cost and can be built upon any feed-forward architecture in an end-to-end manner. Extensive experiments on both classification (i.e., CIFAR and VDD) and detection (i.e., COCO, VOC and the ICCV2021 SSLAD challenge) benchmarks demonstrate the effectiveness of the proposed method, which achieves remarkable new results.
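A hedged sketch (not the E2-AEN release) of a lightweight feature adapter with a learnable gate: when the gate collapses toward zero under a sparsity penalty, the expanded branch can be pruned, so the structure adapts to the complexity of the new task.

import torch
import torch.nn as nn

class GatedAdapter(nn.Module):
    def __init__(self, channels, bottleneck=16):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)
        self.gate = nn.Parameter(torch.zeros(1))  # learnable gate, trained with a sparsity penalty

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        # Frozen backbone feature plus a gated residual branch for the new task.
        return x + g * self.up(torch.relu(self.down(x)))

    def prunable(self, threshold=0.05):
        return torch.sigmoid(self.gate).item() < threshold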
Recently, automatically extracting information from visually rich documents (e.g., tickets and resumes) has become a hot and important research topic due to its widespread commercial value. Most existing methods divide this task into two parts: a text-reading part that obtains plain text from the original document images, and an information-extraction part that extracts the key contents. These methods mainly focus on improving the second part while neglecting that the two parts are highly correlated. This paper proposes a unified end-to-end information extraction framework for visually rich documents, in which text reading and information extraction reinforce each other through a carefully designed multi-modal context block. Specifically, the text-reading part provides multi-modal features, such as visual, textual, and layout features. The multi-modal context block is developed to fuse the generated multi-modal features, and even prior knowledge from pretrained language models, for better semantic representations. The information-extraction part is responsible for generating the key contents from the fused context features. The framework can be trained in an end-to-end manner, achieving global optimization. More importantly, we define and group visually rich documents into four categories across two dimensions, namely layout and text type. For each document category, we provide or recommend the corresponding benchmarks, experimental settings, and strong baselines, to remedy the lack of a uniform evaluation standard in this research field. Extensive experiments on four kinds of benchmarks (from fixed layout to variable layout, from full-structured text to semi-unstructured text) are reported, demonstrating the effectiveness of the proposed method. Data, source code, and models are available.
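A rough sketch of a multi-modal context block of the kind described (names and dimensions are assumptions): visual, textual, and layout features from the text-reading part are projected to a common width and fused with self-attention, and the fused context feeds the information-extraction part.

import torch
import torch.nn as nn

class MultiModalContextBlock(nn.Module):
    def __init__(self, d_visual, d_text, d_layout, d_model=256, nhead=8):
        super().__init__()
        self.proj_v = nn.Linear(d_visual, d_model)
        self.proj_t = nn.Linear(d_text, d_model)
        self.proj_l = nn.Linear(d_layout, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, visual, text, layout):
        # Each input: [batch, num_text_regions, d_*], one token per detected text region.
        tokens = self.proj_v(visual) + self.proj_t(text) + self.proj_l(layout)
        return self.fuse(tokens)   # fused context features for key-content decoding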
Thyroid nodule classification aims to determine whether a nodule is benign or malignant based on a given ultrasound image. However, the label obtained by cytological biopsy, the gold standard in clinical medicine, does not always coincide with the ultrasound imaging TI-RADS criteria. The information difference between the two causes existing deep-learning-based classification methods to be indecisive. To solve the inconsistent-label problem, we propose an Adaptive Curriculum Learning (ACL) framework, which adaptively discovers and discards samples with inconsistent labels. Specifically, ACL takes both sample hardness and model certainty into account and can accurately determine the threshold for distinguishing samples with inconsistent labels. Moreover, we contribute TNCD: a Thyroid Nodule Classification Dataset, to facilitate future related research on thyroid nodules. Extensive experimental results on TNCD based on three different backbone networks not only demonstrate the superiority of our method but also prove the less-is-more principle: strategically discarding samples with inconsistent labels can yield performance gains. The source code and data are available at https://github.com/chenghui-666/ACL/.
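A simplified sketch of the idea (not the released ACL code): rank samples by a score combining per-sample loss (hardness) and model certainty, then drop those above an adaptively chosen cut-off as likely label-inconsistent. The scoring rule and quantile cut-off are illustrative assumptions.

import torch

def filter_inconsistent(losses, probs, keep_ratio=0.9):
    """losses: [N] per-sample loss; probs: [N] predicted probability of the labeled class."""
    certainty = probs                      # high probability for the labeled class -> consistent
    score = losses * (1.0 - certainty)     # samples that are both hard and uncertain score high
    threshold = torch.quantile(score, keep_ratio)  # adaptive cut-off, recomputed each epoch
    keep = score <= threshold
    return keep                            # boolean mask of samples kept for training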
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although some open-source, labeled CFP datasets exist in the ophthalmology community, large-scale screening datasets carry only disease-category labels, and datasets with annotations of fundus structures are usually small. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition devices. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis as part of an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations for glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, several methods with strong generalization ability were provided in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.